122 research outputs found

    Vulnerabilities of Electric Vehicle Battery Packs to Cyberattacks

    Electric Vehicles (EVs), like all modern vehicles, are entirely controlled by electronic devices embedded within networks that are exposed to the threat of cyberattacks. Cyber vulnerabilities are magnified with EVs due to unique risks associated with EV battery packs. Current batteries have well-known issues with specific energy, cost, and fire-related safety risks. In this study, we develop a systematic framework to assess the impact of cyberattacks on EVs. While current work on automotive cyberattacks focuses on short-term physical safety, it is crucial to also consider long-term cyberattacks that aim to cause financial losses through accrued impact, especially in the context of EVs. Compromised components of battery management systems, such as a faulty voltage regulator, could enable cyberattacks that overdischarge or overcharge the battery. Overdischarge could lead to failures such as internal shorts on the timescale of minutes through cyberattacks that compromise energy-intensive EV subsystems like auxiliary components. Attacks that overcharge the pack could shorten the lifetime of a new battery pack to less than a year. Further, such attacks also pose physical safety risks by triggering thermal (fire) events. Attacks on auxiliary components lead to battery drain, which could reach up to 20% of the state of charge per hour. Lastly, we develop a heuristic for the stealthiness of a cyberattack to augment traditional threat models. The methodology presented here will help build the foundational principles of electric vehicle cybersecurity: a nascent but critical topic in the coming years. Comment: 23 pages, 3 figures, 1 table, 9 pages of Supporting Information.
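
    As a back-of-the-envelope illustration of the auxiliary-drain attack described above, the sketch below estimates how long such an attack would take to run a pack down, using the 20% state-of-charge-per-hour figure from the abstract; the pack capacity, starting state of charge, and protection cutoff are illustrative assumptions, not values from the paper.

        # Rough time-to-depletion estimate for an auxiliary-load drain attack.
        # The 20%/hour drain rate comes from the abstract; the other numbers
        # (capacity, starting SoC, protection cutoff) are illustrative guesses.
        PACK_CAPACITY_KWH = 60.0          # assumed pack size
        DRAIN_RATE_SOC_PER_HOUR = 0.20    # up to 20% SoC per hour (abstract)
        START_SOC = 0.80                  # assumed SoC when the attack begins
        CUTOFF_SOC = 0.05                 # assumed low-voltage protection cutoff

        hours_to_cutoff = (START_SOC - CUTOFF_SOC) / DRAIN_RATE_SOC_PER_HOUR
        energy_wasted_kwh = (START_SOC - CUTOFF_SOC) * PACK_CAPACITY_KWH
        print(f"Pack reaches cutoff in ~{hours_to_cutoff:.1f} h, "
              f"wasting ~{energy_wasted_kwh:.0f} kWh")
        # -> roughly 3.8 h and 45 kWh under these assumptions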

    DDA: Cross-Session Throughput Prediction with Applications to Video Bitrate Selection

    User experience of video streaming could be greatly improved by selecting a high yet sustainable initial video bitrate, and it is therefore critical to accurately predict throughput before a video session starts. Inspired by previous studies that show similarity among the throughputs of similar sessions (e.g., those sharing the same bottleneck link), we argue for a cross-session prediction approach, where throughput measured on other sessions is used to predict the throughput of a new session. In this paper, we study the challenges of cross-session throughput prediction, develop an accurate throughput predictor called DDA, and evaluate the performance of the predictor with real-world datasets. We show that DDA can predict throughput more accurately than simple predictors and conventional machine learning algorithms; e.g., DDA's 80th-percentile prediction error is more than 50% lower than that of the other algorithms. We also show that this improved accuracy enables video players to select a higher sustainable initial bitrate; e.g., compared to selecting the initial bitrate without prediction, DDA leads to a 4x higher average bitrate.
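
    The abstract does not spell out DDA's algorithm, so the sketch below only illustrates the general cross-session idea: predict a new session's throughput from the median throughput of previously observed sessions that share its features. The feature keys and the median aggregator are assumptions for illustration, not DDA itself.

        from statistics import median

        # History of completed sessions: (features, measured throughput in Mbps).
        # The feature keys (isp, region) are illustrative; DDA's features and
        # similarity notion are more sophisticated.
        history = [
            ({"isp": "A", "region": "east"}, 4.1),
            ({"isp": "A", "region": "east"}, 3.8),
            ({"isp": "B", "region": "west"}, 9.5),
        ]

        def predict_throughput(new_session, history, default=2.0):
            """Cross-session prediction: median throughput of matching sessions."""
            similar = [tput for feats, tput in history if feats == new_session]
            return median(similar) if similar else default

        print(predict_throughput({"isp": "A", "region": "east"}, history))  # -> 3.95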

    A Framework to Quantify the Benefits of Network Functions Virtualization in Cellular Networks

    Network functions virtualization (NFV) is an appealing vision that promises to dramatically reduce capital and operating expenses for cellular providers. However, existing efforts in this space leave open broad questions about how NFV deployments should be instantiated and provisioned. In this paper, we present an initial attempt at a framework that will help network operators systematically evaluate the potential benefits that different points in the NFV design space can offer.

    Accelerating the Development of Software-Defined Network Optimization Applications Using SOL

    Software-defined networking (SDN) can enable diverse network management applications such as traffic engineering, service chaining, network function outsourcing, and topology reconfiguration. Realizing the benefits of SDN for these applications, however, entails addressing complex network optimizations that are central to these problems. Unfortunately, such optimization problems require significant manual effort and expertise to express, and non-trivial computation or carefully crafted heuristics to solve. Our vision is to simplify the deployment of SDN applications using general high-level abstractions for capturing optimization requirements, from which we can efficiently generate optimal solutions. To this end, we present SOL, a framework that demonstrates that it is indeed possible to achieve generality and efficiency simultaneously. The insight underlying SOL is that SDN applications can be recast within a unifying path-based optimization abstraction, from which SOL efficiently generates near-optimal solutions and the device configurations to implement them. We illustrate the generality of SOL by prototyping diverse and new applications. We show that SOL simplifies the development of SDN-based network optimization applications and provides comparable or better scalability than custom optimization solutions.
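
    As an illustration of the path-based abstraction, the toy linear program below splits a single demand across two precomputed candidate paths so as to minimize the maximum link utilization. The topology, the demand, and the use of scipy.optimize.linprog are assumptions made for this sketch; SOL's actual interface and solution machinery are not shown in the abstract.

        from scipy.optimize import linprog

        # Toy path-based traffic-engineering LP: split one 12-unit demand between
        # a direct path (link A-B) and a two-hop path (links A-C, C-B), each link
        # of capacity 10, minimizing the maximum utilization t.
        # Variable order: [x_direct, x_detour, t].
        c = [0, 0, 1]                        # objective: minimize t
        A_ub = [[12, 0, -10],                # link A-B: 12*x_direct <= 10*t
                [0, 12, -10],                # link A-C: 12*x_detour <= 10*t
                [0, 12, -10]]                # link C-B: 12*x_detour <= 10*t
        b_ub = [0, 0, 0]
        A_eq = [[1, 1, 0]]                   # path fractions sum to 1
        b_eq = [1]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1), (0, 1), (0, None)])
        print(res.x)  # -> about [0.5, 0.5, 0.6]: split evenly, 60% peak utilization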

    NetMemex: Providing Full-Fidelity Traffic Archival

    NetMemex explores efficient network traffic archival without any loss of information. Unlike NetFlow-like aggregation, NetMemex allows retrieving the entire packet data, including full payload, which makes it useful in forensic analysis, networked and distributed systems research, and network administration. Unlike packet trace dumps, NetMemex performs sophisticated data compression to keep storage use small and optimizes the data layout for fast query processing. NetMemex takes advantage of the high-speed random access of flash drives and the inexpensive storage space of hard disk drives. These efforts lead to a cost-effective yet high-performance full traffic archival system. We demonstrate that NetMemex can record full-fidelity traffic at near-Gbps rates using a single commodity machine, handling common queries at up to 90.1 K queries/second, at a low storage cost comparable to conventional hard-disk-only traffic archival solutions. Comment: A reformatted version of the ACM SIGCOMM 2013 submission.
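
    A minimal sketch of the storage split described above: compressed packet records are appended to a bulk log (standing in for the hard-disk tier) while a small per-flow index of byte offsets (standing in for the flash-resident index) supports random-access queries. The zlib compression and dict-based index are stand-ins, not NetMemex's actual formats.

        import zlib

        log = bytearray()   # bulk log of compressed packet records (HDD tier)
        index = {}          # flow 5-tuple -> list of (offset, length) entries (flash tier)

        def archive(flow, payload: bytes):
            record = zlib.compress(payload)
            index.setdefault(flow, []).append((len(log), len(record)))
            log.extend(record)

        def query(flow):
            """Retrieve a flow's packets with a few random reads into the log."""
            return [zlib.decompress(bytes(log[off:off + length]))
                    for off, length in index.get(flow, [])]

        flow = ("10.0.0.1", 1234, "10.0.0.2", 80, "TCP")
        archive(flow, b"GET / HTTP/1.1\r\n\r\n")
        print(query(flow))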

    Scalable Testing of Context-Dependent Policies over Stateful Data Planes with Armstrong

    Network operators today spend significant manual effort on ensuring and checking that the network meets their intended policies. While recent work in network verification has made giant strides to reduce this effort, it focuses on simple reachability properties and cannot handle context-dependent policies (e.g., how many connections a host has spawned) that operators realize using stateful network functions (NFs). Together, these introduce new expressiveness and scalability challenges that fall outside the scope of existing network verification mechanisms. To address these challenges, we present Armstrong, a system that enables operators to test whether a network with stateful data plane elements correctly implements a given context-dependent policy. Our design makes three key contributions to address expressiveness and scalability: (1) An abstract I/O unit for modeling network I/O that encodes policy-relevant context information; (2) A practical representation of complex NFs via an ensemble of finite state machines abstraction; and (3) A scalable application of symbolic execution to tackle state-space explosion. We demonstrate that Armstrong is several orders of magnitude faster than existing mechanisms.
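
    To make the ensemble-of-finite-state-machines idea concrete, the sketch below models a toy stateful NF that enforces a context-dependent policy of the kind quoted above: a host is moved to a blocked state once it has spawned more than a threshold number of connections. The states, threshold, and actions are illustrative, not Armstrong's actual NF models.

        # Toy FSM for a stateful NF: "block a host once it has spawned more than
        # MAX_CONNS connections". Armstrong composes much richer FSM ensembles.
        MAX_CONNS = 3

        class ScanDetectorFSM:
            def __init__(self):
                self.state = {}  # host -> (mode, connection count)

            def on_new_connection(self, host):
                mode, count = self.state.get(host, ("allowed", 0))
                if mode == "blocked":
                    return "drop"
                count += 1
                mode = "blocked" if count > MAX_CONNS else "allowed"
                self.state[host] = (mode, count)
                return "drop" if mode == "blocked" else "forward"

        nf = ScanDetectorFSM()
        print([nf.on_new_connection("h1") for _ in range(5)])
        # -> ['forward', 'forward', 'forward', 'drop', 'drop']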

    A New Approach to DDoS Defense using SDN and NFV

    Networks today rely on expensive and proprietary hardware appliances, which are deployed at fixed locations, for DDoS defense. This introduces key limitations with respect to flexibility (e.g., complex routing to get traffic to these "chokepoints") and elasticity in handling changing attack patterns. We observe an opportunity to address these limitations using new networking paradigms such as software-defined networking (SDN) and network functions virtualization (NFV). Based on this observation, we design and implement Bohatei, an elastic and flexible DDoS defense system. In designing Bohatei, we address key challenges of scalability, responsiveness, and adversary resilience. We have implemented defenses for several well-known DDoS attacks in Bohatei. Our evaluations show that Bohatei is scalable (handling 500 Gbps attacks), responsive (mitigating attacks within one minute), and resilient to dynamic adversaries.
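
    A minimal sketch of the elasticity aspect: given an observed attack volume, decide how many virtual scrubbing instances to launch and how much traffic each should absorb. The per-VM scrubbing capacity and the even split are assumptions for illustration; Bohatei's resource-management and traffic-steering algorithms are considerably more involved.

        import math

        PER_VM_GBPS = 10   # assumed scrubbing capacity of one virtual instance

        def provision(attack_gbps):
            """Return the number of scrubber VMs and the share each handles."""
            vms = max(1, math.ceil(attack_gbps / PER_VM_GBPS))
            return vms, attack_gbps / vms

        for volume in (40, 120, 500):   # 500 Gbps matches the scale in the abstract
            vms, share = provision(volume)
            print(f"{volume} Gbps attack -> {vms} VMs, ~{share:.1f} Gbps each")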

    Fighting Fire with Light: A Case for Defending DDoS Attacks Using the Optical Layer

    The DDoS attack landscape is growing at an unprecedented pace. Inspired by recent advances in optical networking, we make a case for optical layer-aware DDoS defense (O-LAD) in this paper. Our approach leverages the optical layer to rapidly isolate attack traffic via dynamic reconfiguration of (backup) wavelengths using ROADMs, bridging the gap between (a) the evolution of the DDoS attack landscape and (b) innovations in the optical layer (e.g., reconfigurable optics). We show that the physical separation of traffic profiles allows finer-grained handling of suspicious flows and offers better performance for benign traffic in the face of an attack. We present preliminary results modeling throughput and latency for legitimate flows while scaling the strength of attacks. We also identify a number of open problems for the security, optical, and systems communities: modeling diverse DDoS attacks (e.g., fixed vs. variable rate, detectable vs. undetectable), building a full-fledged defense system with optical advancements (e.g., OpenConfig), and optical layer-aware defenses for a broader class of attacks (e.g., network reconnaissance). Comment: 6 pages, 4 figures.
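
    A toy illustration of the traffic-separation idea: flows scored as suspicious are steered onto a backup wavelength for closer inspection while benign flows stay on the primary wavelength. The suspicion scores, threshold, and two-wavelength model are assumptions for this sketch; actual ROADM reconfiguration (e.g., driven via OpenConfig) is not modeled here.

        # Toy optical-layer separation: steer suspicious flows to a backup wavelength.
        # Scores and the 0.5 threshold are illustrative only.
        SUSPICION_THRESHOLD = 0.5

        def assign_wavelengths(flows):
            """flows: list of (flow_id, suspicion score in [0, 1])."""
            plan = {"primary": [], "backup": []}
            for flow_id, score in flows:
                lane = "backup" if score >= SUSPICION_THRESHOLD else "primary"
                plan[lane].append(flow_id)
            return plan

        flows = [("f1", 0.1), ("f2", 0.9), ("f3", 0.4), ("f4", 0.7)]
        print(assign_wavelengths(flows))
        # -> {'primary': ['f1', 'f3'], 'backup': ['f2', 'f4']}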

    CARE: Content Aware Redundancy Elimination for Disaster Communications on Damaged Networks

    During a disaster scenario, situational awareness information, such as location, physical status, and images of the surrounding area, is essential for minimizing loss of life, injury, and property damage. Today's handhelds make it easy for people to gather data from within the disaster area in many formats, including text, images, and video. Studies show that the extreme anxiety induced by disasters causes humans to create a substantial amount of repetitive and redundant content. Transporting this content outside the disaster zone can be problematic when the network infrastructure is disrupted by the disaster. This paper presents the design of a novel architecture called CARE (Content-Aware Redundancy Elimination) for better utilizing network resources in disaster-affected regions. Motivated by measurement-driven insights on redundancy patterns found in real-world disaster-area photos, we demonstrate that CARE can detect the semantic similarity between photos within the network layer, thus reducing redundant transfers and improving buffer utilization. Using DTN simulations, we explore the boundaries of the usefulness of deploying CARE on a damaged network, and show that CARE can reduce packet delivery times and drops, and enable 20-40% more unique information to reach the rescue teams outside the disaster area than when it is not deployed.
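
    One simple way to approximate the photo-similarity check is a perceptual "average hash": downscale each photo, threshold pixels against the mean, and drop a transfer whose hash is within a small Hamming distance of one already forwarded. The use of Pillow, the hash construction, and the filenames are assumptions for this sketch; the abstract does not specify CARE's actual similarity detection.

        from PIL import Image

        def average_hash(path, size=8):
            """64-bit perceptual hash: downscale, grayscale, threshold at the mean."""
            pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
            mean = sum(pixels) / len(pixels)
            return sum(1 << i for i, p in enumerate(pixels) if p >= mean)

        def is_redundant(h, forwarded_hashes, max_distance=5):
            """Treat a photo as redundant if it is near-identical to one already sent."""
            return any(bin(h ^ seen).count("1") <= max_distance
                       for seen in forwarded_hashes)

        forwarded = set()
        for photo in ["shelter_1.jpg", "shelter_2.jpg"]:   # hypothetical filenames
            h = average_hash(photo)
            if not is_redundant(h, forwarded):
                forwarded.add(h)   # forward the photo; here we only record its hash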

    Why Spectral Normalization Stabilizes GANs: Analysis and Improvements

    Spectral normalization (SN) is a widely used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs). However, there is currently limited understanding of why SN is effective. In this work, we show that SN controls two important failure modes of GAN training: exploding and vanishing gradients. Our proofs illustrate a (perhaps unintentional) connection with the successful LeCun initialization. This connection helps to explain why the most popular implementation of SN for GANs requires no hyper-parameter tuning, whereas stricter implementations of SN have poor empirical performance out of the box. Unlike LeCun initialization, which only controls gradient vanishing at the beginning of training, SN preserves this property throughout training. Building on this theoretical understanding, we propose a new spectral normalization technique, Bidirectional Scaled Spectral Normalization (BSSN), which incorporates insights from later improvements to LeCun initialization: Xavier initialization and Kaiming initialization. Theoretically, we show that BSSN gives better gradient control than SN. Empirically, we demonstrate that it outperforms SN in sample quality and training stability on several benchmark datasets. Comment: 54 pages, 74 figures.
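
    For reference, the sketch below implements plain spectral normalization with power iteration in NumPy: estimate the weight matrix's largest singular value and divide the weights by it. This is the standard SN building block the paper analyzes, not its BSSN variant, and the layer shape is illustrative.

        import numpy as np

        def spectral_normalize(W, u, n_iters=1):
            """Power iteration to estimate the top singular value of W,
            then return W divided by that estimate (standard SN, not BSSN)."""
            for _ in range(n_iters):
                v = W.T @ u
                v /= np.linalg.norm(v) + 1e-12
                u = W @ v
                u /= np.linalg.norm(u) + 1e-12
            sigma = u @ W @ v          # estimated largest singular value
            return W / sigma, u        # u is persisted across training steps

        rng = np.random.default_rng(0)
        W = rng.normal(size=(256, 128))      # illustrative layer shape
        u = rng.normal(size=256)
        W_sn, u = spectral_normalize(W, u, n_iters=3)
        print(np.linalg.svd(W_sn, compute_uv=False)[0])   # close to 1.0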